Search Results: "Mark Brown"

16 July 2010

Mark Brown: LPC 2010 submit your papers now!

As Lennart just posted, the Call for Papers for the 2010 Linux Plumbers Conference (LPC) closes this Monday (the 19th of July). The goal of LPC is to get the people working on the various projects that make up the key Linux infrastructure where the kernel and application layers meet together, so that everyone understands everyone else's needs and all the components of the system can be made to work together as well as possible. This is the third year the conference has been run, with the previous two years having been very productive, and we hope that the event this year will be equally successful.

This year I'll be joining Lennart Poettering in helping to organize the audio track, with my particular focus being on the needs of embedded systems. Embedded audio is currently undergoing a rapid evolution, especially for mobile phones, so there's a lot of exciting work to be done to make sure that the software stack can readily meet the needs of practical applications. Things like the move from analogue to digital audio routing within devices and the increasingly rich audio feature sets provided by systems like smartphones are driving rapid development in this area, with an impact over the full software stack from device drivers up to the application layer.

If you are involved with implementing or deploying the Linux audio infrastructure, please join us there to discuss the work that needs doing and help improve the Linux audio experience even further - and please also submit a proposal.

17 May 2010

Mark Brown: ASoC updates in 2.6.34

Linux 2.6.34 was released today. This contains a fairly substantial batch of ASoC updates, including:

27 February 2010

Mark Brown: ASoC updates in 2.6.33

This has been another fairly quiet release for ASoC. Aside from the addition of virtual mux support to DAPM and some further preparatory work for multi-CODEC cards, the majority of changes have been driver updates, including:

24 December 2009

Mark Brown: EIFF 2009

The Edinburgh Film Festival finished a couple of weeks ago. As ever, I went along and saw a bunch of films and stage interviews. The programme had been a little disappointing, mostly due to feeling a little constricted: some things that are usually present were dropped (Mirrorball being the most obvious example) and the late night movies weren't very late night, starting about 10:30 for the most part. That said, it was a lot of fun - I was more successful than normal in avoiding duds and there were several things that really stood out.

Two films I saw early on that really stood out were Moon and Exam, two low budget indie science fiction films. Moon has had an awful lot of publicity already so I won't repeat what other people have said about it. Exam is a very tight, taut thriller: eight people in a room in the final test of a long interview process, one of whom will get the job. Both films looked great, a testament to how affordable good CGI has become. I'll be interested to see what follows them now there are some examples of low budget SF out there, and I'm not sure what to make of the fact that both of the films were British.

Pontypool was also excellent, a zombie movie about the dangers, or possibly salvation, of talk radio. If you see it (which you should) make sure you stay for the end of the credits. Also good was Modern Love is Automatic. It's a low budget indie flick which reminded me an awful lot of The Unbelievable Truth, partly in terms of visual design but more in the way it decided to just jump off and handle things in a totally non-naturalistic fashion. It's a really tricky thing to pull off without looking like you just don't care about the audience (witness a lot of experimental films) but it's very impressive when it works, and it worked here.

On the down side, Dario Argento's Giallo had the audience laughing, and I'm fairly sure it was an "at" laugh rather than a "with" laugh. There came a point in the film where it felt like they'd just run out of enthusiasm for the whole thing and were just throwing anything on the screen to tie up the loose ends. Very disappointing at a film festival. That was the only real blip, though; overall it was good, even if there was cost cutting in evidence.

9 December 2009

Mark Brown: Oh dear

Subject: zlib_1.2.3.3.dfsg-16_amd64.changes REJECTED
Reject Reasons:
lib32z1: lintian output: 'embedded-zlib ./usr/lib32/libz.so.1.2.3.3',
+automatically rejected package.
lib32z1: If you have a good reason, you may override this lintian tag.
I guess I should've actually reported the lintian bug rather than just ignoring the bogus warning.

3 December 2009

Mark Brown: ASoC updates in 2.6.32

Linux 2.6.32 was released overnight. This has been a fairly busy release for ASoC, with changes including:

26 October 2009

Mark Brown: Setting up regulator consumers with dev_name

The Linux kernel regulator API requires that each system sets up the connections between the various voltage and current regulators in the system and the devices they supply, known as consumers within the regulator API. This is done using the struct device for the consumer device as the key for consumer access. That works well for things like platform devices, which are generally allocated at system startup, but is not really usable with buses like I2C which only allocate the struct device late on during system startup. To help work with these buses, Linux 2.6.32 will follow the clock API and allow the use of dev_name() for the device instead. A new field dev_name has been added to struct regulator_consumer_supply which should be used instead of dev; this is now the preferred mechanism. It is a simple string and so does not depend on any other initialization. For those backporting, the relevant commit is 744ea1d0b59bf084f19559b8f199b644fbb0899c.
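As a rough sketch of what this can look like in board code (a sketch only: the consumer name "1-001a", the supply name "AVDD" and the surrounding init data are illustrative placeholders, not taken from the post or any real board), a consumer can now be described by name rather than by struct device pointer:

#include <linux/kernel.h>
#include <linux/regulator/machine.h>

/* Consumer matched by dev_name() string rather than a struct device
 * pointer; "1-001a" (an I2C device name) and "AVDD" are made-up
 * examples. */
static struct regulator_consumer_supply board_avdd_consumers[] = {
	{
		.dev_name = "1-001a",	/* dev_name() of the consuming device */
		.supply   = "AVDD",	/* supply name the driver will request */
	},
};

static struct regulator_init_data board_avdd_data = {
	.constraints = {
		.valid_ops_mask = REGULATOR_CHANGE_STATUS,
	},
	.num_consumer_supplies = ARRAY_SIZE(board_avdd_consumers),
	.consumer_supplies = board_avdd_consumers,
};

The string just has to match whatever dev_name() returns for the consumer once it has been instantiated, which is what makes this workable for I2C and other buses that create their struct device late.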

10 September 2009

Mark Brown: ASoC updates in 2.6.31

Linux 2.6.31 was released today. This was a fairly busy release for the ASoC subsystem, with updates including: Plus lots of updates and fixes to existing code, especially to the OMAP and TWL4030 drivers.

5 September 2009

Mark Brown: Chasing patches into Linux

One thing that often seems to cause problems for people who work over many different areas of the Linux kernel is the process of making sure that patches actually get reviewed and applied. Where the relevant subsystem is actively maintained it's not a problem, but that's not always the case. Sometimes maintainers are busy or on holiday and miss things, sometimes there are other problems. In these cases the onus is on the patch submitter to spot the problem and make sure that something is done to ensure that the patch doesn't get forgotten.

There's a few workflows for dealing with this. My preferred one is to track the appearance of my patches in Stephen Rothwell's linux-next tree, which tracks individual development trees destined for merge into Linus' tree. I create a git branch based on the current state of this tree then apply the patches I'm submitting on top of that. This lets me spot any potential merge conflicts that they'll create, but the main function is to allow me to come back to the branch later and track which of the patches has shown up in one of the trees that Stephen tracks. To do this I rebase the branch onto the current state of linux-next:
git rebase --onto next/master old-master
where old-master is the last linux-next commit in the branch. This will flag up any merge issues that have come up due to changes in other trees and will also handle patches that are already present in one of the trees in -next by dropping my local version. The end result is a branch based on the new linux-next with all the patches that were not yet applied in it. I can see what still needs to be looked at by examining the log
git shortlog next/master..
and take any appropriate action, such as following up with the relevant maintainer or trying to find out what's going on with the subsystem if it looks like the subsystem maintainers are inactive. One possible problem with this approach is that a patch may be applied and then subsequently dropped; this is rare but it can happen. I deal with that by also keeping a normal, unrebased development branch which has the changes in Linus' tree merged into it periodically, and incremental patches for any review updates that occur during the submission process. By looking at the diff between that and other trees I can see any changes that have got lost along the way.

14 March 2009

Mark Brown: etckeeping git

Working with multiple upstream kernel trees and keeping an eye on more, my git repositories tend to end up with large numbers of remotes (this laptop has 22, for example). It's getting to the point where I need something like etckeeper to keep track of them all.

18 February 2009

Mark Brown: Buffer overflows ahoy

I may be wrong on this but it looks like Microsoft SMTP clients (at least Windows Mail and Outlook) don't like being sent a large volume of SSL certificate information when opening a TLS connection. They appear to assume that the data they are being sent is malformed and that STARTTLS failed, continuing with an unencrypted SMTP dialogue. This can be triggered relatively easily on a Debian system by telling Exim to use all the certificates provided by the ca-certificates package (which is the default configuration). The Windows clients will give an unhelpful "the remote end dropped the connection" style error, caused by the server getting upset by the unexpected fallback to unencrypted SMTP. The server logs will show something like this:
2009-02-17 21:32:55 TLS error on connection from client.example.com (Client) [192.168.192.168] (gnutls_handshake): A TLS packet with unexpected length was received.
2009-02-17 21:33:00 SMTP protocol synchronization error (input sent without waiting for greeting): rejected connection from H=client.example.com [192.168.192.168] input="EHLO Client\r\n"
Configuring the MAIN_VERIFY_TLS_CERTIFICATES option in the Debian Exim configuration (which sets the tls_verify_certificates option in the actual Exim configuration) to point to something with fewer certificates in it should avoid the issue. On the bright side, at least they're making an effort to avoid overflows.

15 February 2009

Mark Brown: Upgrading crm114

When upgrading from older crm114 releases and trying to retain your existing configuration it is important to check that all the configuration options that the new version expects to be set have been set. While some will cause errors if they're omitted, others will appear to work but will cause unwanted behaviour at runtime. For example, omitting good_threshold and spam_threshold will cause everything to be flagged as spam in X-CRM114-Status even though the classifier is working well. In practice there are relatively few configuration options that users are expected to configure so it may be easier to redo the configuration based on the example provided. For safety it's best to delete your existing CSS files too in case they've been invalidated by a configuration or format change. It's all a bit manual but it's worth it for what it does for my inbox.

27 October 2008

Mark Brown: Getting kernel support

[This is a slight modification of something I posted to the alsa-devel list earlier today.] One of the biggest surprises that people starting to use Linux seem to run into is that you can’t rely on any particular support level from the community - everything is done on a voluntary basis and the responses will depend on a range of factors, including things like how busy the people involved are. People moving to Linux for reasons other than freedom, particularly those using it commercially, often don’t seem to notice this distinction. You can normally help get a response by providing as much information as possible about your problem and the steps you have taken to resolve it - this makes it very much easier for people to reply since, for example, it’s more likely that something will jump out at them. These web pages contain some suggestions on the sorts of thing to do in your e-mail to help get the best response: How to ask questions the smart way
How to report bugs effectively (this is targeted at end users more than developers). With the kernel community it can also help to send direct copies of your mail to people who have worked on the relevant code since people may either miss postings on mailing lists (there is often a lot of traffic) or in some cases not be subscribed to the lists at all. This doesn’t apply to all free software projects - you should check the normal standards for a given project before doing this. If you need guaranteed responses or more detailed responses than you are able to obtain from the community the usual approach is to work with people with whom you have a commercial relationship - for example, your chip or software vendors, or consultants you have employed.

16 October 2008

Mark Brown: If we build it they will come

It looks like the jack reporting API for ALSA, which just got merged into the mainline kernel for inclusion in 2.6.28, already has its first user - code from Matthew Ranostay supporting jack detection in Sigmatel HDA codecs was just queued for merge in the next merge window. Admittedly, the jack reporting API has been available in ALSA git since July, but it does look like impressively fast adoption after the mainline merge. Still, there's a long way to go before user space can start to rely on knowing if things are plugged in or not.
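For anyone curious what the kernel side looks like, here is a minimal sketch of how a driver might use the 2.6.28 jack API (the jack name and the wrapper functions are illustrative, not taken from the Sigmatel patches):

#include <sound/core.h>
#include <sound/jack.h>

static struct snd_jack *hp_jack;

/* Register a headphone jack against the card, typically at probe time;
 * "Headphone Jack" is just an example name. */
static int example_register_jack(struct snd_card *card)
{
	return snd_jack_new(card, "Headphone Jack", SND_JACK_HEADPHONE,
			    &hp_jack);
}

/* Called from the detection path (GPIO interrupt, codec IRQ, polling)
 * to tell the core whether anything is currently plugged in. */
static void example_report_jack(int plugged_in)
{
	snd_jack_report(hp_jack, plugged_in ? SND_JACK_HEADPHONE : 0);
}

The reports are surfaced to user space as input layer switch events, which is what applications will eventually be able to rely on for knowing whether something is plugged in.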

26 September 2008

Mark Brown: What’s the standard Linux audio API?

Lennart Poettering’s post about the sound APIs available for Linux appears to have caused some consternation among people working with the modern out of tree OSS drivers, who feel that the current, out of tree, OSS drivers are being unfairly maligned. This rather misses the point of his post. The fact that there are improved versions of the OSS code doesn’t really help developers who are trying to target current Linux distributions which only ship the old OSS drivers. From this point of view the new OSS drivers are probably best looked at as a completely different product. Joss is right, though - most applications should be working with a higher level user space API than either ALSA or OSS. One of the most obvious examples of this is in the embedded space, where there are often vast numbers of controls that need to be exported in order to support the complex audio routing that devices like phones can have. Most of these should only be touched very occasionally when changing use case and should therefore be hidden from normal applications, where they’re at best irrelevant and at worst confuse end users. They do, however, need to be exposed by the kernel in order to allow user space the flexibility to manage the audio configuration of the system at run time.

20 September 2008

Mark Brown: When was that, then?

More than once I’ve found myself figuring out when I did something by looking through the changelogs of the free software I was contributing to at the time. It’s rarely any good for specific dates but it works amusingly well to an approximation. And is, of course, a perfectly normal way to do this.

17 September 2008

Mark Brown: EIFF 2008

It’s been so long since the film festival that I keep on forgetting half the good films I saw there when talking to people about it, so for the record here’s a brief list of my personal highlights: As far as the EIFF moving to June goes… I’m not convinced. I didn’t notice any dramatic improvement in the quality of the programme and while it did avoid the rain that Edinburgh suffered in August there’s nothing quite like the atmosphere you get during the main festival.

12 September 2008

Mark Brown: The Linux kernel needs a case sensitive filesystem

The Linux kernel source relies on a case sensitive filesystem. If you attempt to get the sources via git this will manifest as an error along the lines of:
fatal: Entry 'include/linux/netfilter/xt_CONNMARK.h
There are several header files like this whose names differ only in case, and they just can’t be represented on a filesystem that doesn’t distinguish between cases. This is most commonly seen by people attempting to build embedded Linux from Windows.

30 August 2008

Mark Brown: Apple Mail and format=flowed

There’s one thing that the Apple Mail client gets right which I’ve never seen anything else try to do - the way it formats messages. Most mail clients seem to offer plain text and HTML as user selectable options and do exactly what they’re told regardless of the content of the message. If HTML is enabled they always send a mail with both text/plain and text/html renditions of the message. Normally the plain text version is a fixed, 80 column version. This is wasteful of bandwidth, especially since very few users actually use any formatting at all, and means that mail programs that don’t do HTML have to treat the mails as though the fixed layout the sending system chooses is important even when it results in poor layout (for example, on mobile devices with small screens).

What Apple Mail does here is to only enable the more complex formatting options if they add information that can’t be represented in the less complex formats. By default mails are sent in text/plain with the format=flowed option to let the reader know it can safely reflow the text, and no HTML alternative is generated. If something that can’t be represented using format=flowed is included in the message then an HTML alternative is generated - transparently and without user intervention.

This is good partly because it’s nice to see format=flowed used (it’s a nice technical solution to the problem), but mostly because it’s great user interface design. Most Apple Mail users will never notice if it is or isn’t generating HTML e-mail; they’ll just see that it’s doing what they expect and won’t have to deal with an option that they probably don’t understand or have much of a view on. Other users won’t be troubled with HTML generated by Apple Mail users unless there is some content in the formatting. It’d be good to see more MUAs implementing similar behavior, at least optionally.

26 August 2008

Mark Brown: Touching like spacemen

Rhonda, have you reported the SCons problems you’ve found to either the Debian maintainer or upstream? That’s much more likely to be an effective way of improving things than blogging about them. For what it’s worth the .scons files are a bug in the SCons core AFAICT (it needs a distclean equivalent that doesn’t appear to be there; I suspect nobody has asked for it before) and the failure to clean up other generated files will be bugs in the support for whatever tool is being used to do the build.
